
    Design of asynchronous supervisors

    One of the main drawbacks in implementing the interaction between a plant and a supervisor synthesised by the supervisory control theory of Ramadge and Wonham is inexact synchronisation. Balemi was the first to consider this problem, and the solutions given in his PhD thesis were in the domain of automata theory. Our goal is to address the issue of inexact synchronisation in a process algebra setting, because there we get concepts like modularity and abstraction for free, which are useful for further analysis of the synthesised system. In this paper, we propose four methods to check whether a closed-loop system in an asynchronous setting is branching bisimilar to the modified (asynchronous) closed-loop system. We modify a given closed-loop system by introducing buffers either in the plant models, in the supervisor models, in the output channels of both supervisor and plant models, or in the input channels of both supervisor and plant models. A notion of desynchronisable closed-loop system is introduced: a class of synchronous closed-loop systems that are branching bisimilar to their corresponding asynchronous versions. Finally, we study several case studies in an asynchronous setting and summarise the observations (or conditions) that will be helpful in formulating a theory of desynchronisable closed-loop systems.

    Quantitative Safety: Linking Proof-Based Verification with Model Checking for Probabilistic Systems

    This paper presents a novel approach for augmenting proof-based verification with performance-style analysis of the kind employed in state-of-the-art model checking tools for probabilistic systems. Quantitative safety properties, usually specified as probabilistic system invariants and modelled in proof-based environments, are evaluated using bounded model checking techniques. Our specific contributions include the statement of a theorem that is central to model checking safety properties of proof-based systems, the establishment of a procedure, and its full implementation in a prototype system (YAGA) which readily transforms a probabilistic model specified in a proof-based environment into an equivalent verifiable PRISM model equipped with reward structures. The reward structures capture the exact interpretation of the probabilistic invariants and can reveal succinct information about the model during experimental investigations. Finally, we demonstrate the novelty of the technique on a probabilistic library case study.

    Markovian Testing Equivalence and Exponentially Timed Internal Actions

    In the theory of testing for Markovian processes developed so far, exponentially timed internal actions are not admitted within processes. When present, these actions cannot be abstracted away, because their execution takes a nonzero amount of time and hence can be observed. On the other hand, they must be carefully taken into account, in order not to equate processes that are distinguishable from a timing viewpoint. In this paper, we recast the definition of Markovian testing equivalence in the framework of a Markovian process calculus including exponentially timed internal actions. Then, we show that the resulting behavioral equivalence is a congruence, has a sound and complete axiomatization, has a modal logic characterization, and can be decided in polynomial time.

    An interview study about the use of logs in embedded software engineering

    Context: Execution logs capture the run-time behavior of software systems. To assist developers in their maintenance tasks, many studies have proposed tools to analyze execution information from logs. However, it is as yet unknown how industry developers use logs in embedded software engineering. Objective: In this study, we aim to understand how developers use logs in an embedded software engineering context. Specifically, we would like to gain insights into the types of logs developers analyze, the purposes for which developers analyze logs, the information developers need from logs, and their expectations of tool support. Method: To achieve this aim, we conducted two interview studies. First, we interviewed 25 software developers from ASML, a leading company in developing lithography machines. This exploratory case study provided the preliminary findings. Next, we validated and refined our findings by conducting a replication study, involving 14 interviewees from four companies who have different software engineering roles in their daily work. Results: From our first study, we compiled a preliminary taxonomy consisting of four types of logs used by developers in practice, 18 purposes of using logs, 13 types of information developers search for in logs, 13 challenges faced by developers in log analysis, and three suggestions for tool support provided by developers. This taxonomy was refined in the replication study with three additional purposes, one additional information need, four additional challenges, and three additional suggestions for tool support. In addition, across these two studies we observed that text-based editors and self-made scripts are commonly used when it comes to tooling in log analysis practice. As indicated by the interviewees, the development of automatic analysis tools is hindered by the quality of the logs, which further suggests several challenges in log instrumentation and management.
Conclusions: Based on our study, we provide suggestions for practitioners on logging practices. We provide implications for tool builders on how to further improve tools based on existing techniques. Finally, we suggest some research directions and studies for researchers to further study software logging.

    The Hazard Value: A Quantitative Network Connectivity Measure Accounting for Failures

    To meet their stringent requirements in terms of performance and dependability, communication networks should be "well connected". While classic connectivity measures typically revolve around topological properties, e.g., related to cuts, these measures may not reflect well the degree to which a network is actually dependable. We introduce a more refined measure for network connectivity, the hazard value, which is developed to meet the needs of a real network operator. It accounts for crucial aspects affecting the dependability experienced in practice, including actual traffic patterns, distribution of failure probabilities, routing constraints, and alternatives for services with preferences therein. We analytically show that the hazard value fulfills several fundamental desirable properties that make it suitable for comparing different network topologies with one another, and for reasoning about how to efficiently enhance the robustness of a given network. We also present an optimised algorithm to compute the hazard value and an experimental evaluation against networks from the Internet Topology Zoo and classical datacenter topologies, such as fat trees and BCubes. This evaluation shows that the algorithm computes the hazard value within minutes for realistic networks, making it practically usable for network designers.

    Verifying Real-Time Systems using Explicit-time Description Methods

    Timed model checking has been extensively researched in recent years. Many new formalisms with time extensions, and tools based on them, have been presented. On the other hand, explicit-time description methods aim to verify real-time systems with general untimed model checkers. Lamport presented an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables for time requirements. This paper proposes a new explicit-time description method with no reliance on global variables. Instead, it uses rendezvous synchronization steps between the Tick process and each system process to simulate time. This new method achieves better modularity and facilitates the use of more complex timing constraints. The two explicit-time description methods are implemented in DIVINE, a well-known distributed-memory model checker. Preliminary experimental results show that our new method, with better modularity, is comparable to Lamport's method with respect to time and memory efficiency.
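    The rendezvous idea described above can be sketched in plain Python. This is an illustration only, not the DIVINE models from the paper: the `Process` class, its `deadline` field, and the `run` driver are hypothetical names, and the "rendezvous" is modelled simply as the Tick loop calling into each process once per time step, so timing constraints stay local to each process rather than living in shared global variables.

    ```python
    # Minimal sketch of an explicit-time Tick process (illustrative, not
    # taken from the paper): time advances only after every system process
    # has synchronised on the current tick.

    class Process:
        """A system process with a local timing constraint (in ticks)."""
        def __init__(self, name, deadline):
            self.name = name
            self.deadline = deadline  # local constraint; no global variables
            self.done = False

        def on_tick(self, now):
            # Rendezvous point: the process inspects its own constraint
            # against the announced time, keeping timing logic modular.
            if not self.done and now >= self.deadline:
                self.done = True

    def run(processes, horizon):
        """Tick process: one rendezvous with each process per time step."""
        for now in range(1, horizon + 1):
            for p in processes:       # synchronise with every process
                p.on_tick(now)
            if all(p.done for p in processes):
                return now            # tick at which all constraints are met
        return None

    procs = [Process("a", 3), Process("b", 5)]
    print(run(procs, 10))  # -> 5
    ```

    Keeping the constraint inside `on_tick` mirrors the claimed modularity benefit: adding a process with a new timing constraint requires no change to the Tick driver.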

    Modelling Clock Synchronization in the Chess gMAC WSN Protocol

    We present a detailed timed automata model of the clock synchronization algorithm that is currently being used in a wireless sensor network (WSN) developed by the Dutch company Chess. Using the Uppaal model checker, we establish that in certain cases a static, fully synchronized network may eventually become unsynchronized if the current algorithm is used, even in a setting with infinitesimal clock drifts.

    The effects of once- versus twice-weekly sessions on psychotherapy outcomes in depressed patients

    Background: It is unclear what session frequency is most effective in cognitive-behavioural therapy (CBT) and interpersonal psychotherapy (IPT) for depression. Aims: To compare the effects of once-weekly and twice-weekly sessions of CBT and IPT for depression. Method: We conducted a multicentre randomised trial from November 2014 through December 2017. We recruited 200 adults with depression across nine specialised mental health centres in the Netherlands. This study used a 2 × 2 factorial design, randomising patients to once-weekly or twice-weekly sessions of CBT or IPT over 16-24 weeks, up to a maximum of 20 sessions. The main outcome measure was depression severity, measured with the Beck Depression Inventory-II at baseline, before session 1, and 2 weeks, 1, 2, 3, 4, 5 and 6 months after the start of the intervention. Intention-to-treat analyses were conducted. Results: Compared with patients who received weekly sessions, patients who received twice-weekly sessions showed a statistically significant decrease in depressive symptoms (estimated mean difference between weekly and twice-weekly sessions at month 6: 3.85 points, difference in effect size d = 0.55), lower attrition rates (n = 16 compared with n = 32) and an increased rate of response (hazard ratio 1.48, 95% CI 1.00-2.18). Conclusions: In clinical practice settings, delivery of twice-weekly sessions of CBT and IPT for depression is a way to improve depression treatment outcomes.
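    The two effect figures in the results are linked by standard effect-size arithmetic: Cohen's d is the mean difference divided by the (pooled) standard deviation. A quick back-of-the-envelope check, assuming the reported d was computed this way (the pooled SD itself is not stated in the abstract):

    ```python
    # d = mean_difference / pooled_sd, so the reported numbers imply the
    # pooled SD of BDI-II scores (an inference, not a figure from the paper).
    mean_difference = 3.85   # BDI-II points at month 6
    effect_size_d = 0.55

    implied_pooled_sd = mean_difference / effect_size_d
    print(round(implied_pooled_sd, 1))  # -> 7.0
    ```

    An implied pooled SD of about 7 BDI-II points is plausible for a depressed outpatient sample, which is a useful sanity check when reading such results.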

    Cost-effectiveness of opportunistic screening and minimal contact psychotherapy to prevent depression in primary care patients

    Background: Depression causes a large burden of disease worldwide. Effective prevention has the potential to reduce that burden considerably. This study aimed to investigate the cost-effectiveness of minimal contact psychotherapy, based on Lewinsohn's 'Coping with depression' course, targeted at opportunistically screened individuals with sub-threshold depression. Methods and Results: Using a Markov model, future health effects and costs of an intervention scenario and a current practice scenario were estimated. The time horizon was five years. Incremental cost-effectiveness ratios were expressed in euros per Disability Adjusted Life Year (DALY) averted. Probabilistic sensitivity analysis was employed to study the effect of uncertainty in the model parameters. From the health care perspective the incremental cost-effectiveness ratio was €1,400 per DALY, and from the societal perspective the intervention was cost-saving. Although the estimated incremental costs and effects were surrounded by large uncertainty, given a willingness to pay of €20,000 per DALY, the probability that the intervention is cost-effective was around 80%. Conclusion: This modelling study showed that opportunistic screening in primary care for sub-threshold depression, in combination with minimal contact psychotherapy, may be cost-effective in the prevention of major depression.
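    The incremental cost-effectiveness ratio (ICER) reported above is the extra cost of the intervention scenario divided by the extra health gain over current practice. A minimal sketch of that arithmetic; all numbers below are made up for illustration (chosen so the ratio lands on the paper's €1,400 figure), not taken from the study's Markov model:

    ```python
    # ICER = (cost_intervention - cost_control) / (effect_intervention - effect_control),
    # with effects measured in DALYs averted.

    def icer(cost_int, cost_ctrl, dalys_int, dalys_ctrl):
        """Incremental cost per additional DALY averted."""
        delta_cost = cost_int - cost_ctrl
        delta_effect = dalys_int - dalys_ctrl
        return delta_cost / delta_effect

    # Hypothetical figures: the intervention costs 70,000 euros more and
    # averts 50 more DALYs than current practice over the time horizon.
    print(icer(170_000, 100_000, 120, 70))  # -> 1400.0 (euros per DALY averted)
    ```

    An intervention is then judged cost-effective when its ICER falls below the willingness-to-pay threshold (€20,000 per DALY in this study); a negative delta-cost with a positive delta-effect is what the abstract calls "cost-saving".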

    Cognitive Behavioral Therapy versus Short Psychodynamic Supportive Psychotherapy in the outpatient treatment of depression: a randomized controlled trial

    Background: Previous research has shown that Short Psychodynamic Supportive Psychotherapy (SPSP) is an effective alternative to pharmacotherapy and combined treatment (SPSP and pharmacotherapy) in the treatment of depressed outpatients. The question remains, however, how Short Psychodynamic Supportive Psychotherapy compares with other established psychotherapy methods. The present study compares Short Psychodynamic Supportive Psychotherapy to the evidence-based Cognitive Behavioral Therapy in terms of acceptability, feasibility, and efficacy in the outpatient treatment of depression. Moreover, this study aims to identify clinical predictors that can distinguish patients who may benefit from either of these treatments in particular. This article outlines the study protocol. The results of the study, which is currently being carried out, will be presented as soon as they are available. Methods/Design: Adult outpatients with a main diagnosis of major depressive disorder or depressive disorder not otherwise specified according to DSM-IV criteria and mild to severe depressive symptoms (Hamilton Depression Rating Scale score ≥ 14) are randomly allocated to Short Psychodynamic Supportive Psychotherapy or Cognitive Behavioral Therapy. Both treatments are individual psychotherapies consisting of 16 sessions within 22 weeks. Assessments take place at baseline (week 0), during the treatment period (weeks 5 and 10) and at treatment termination (week 22). In addition, a follow-up assessment takes place one year after treatment start (week 52). Primary outcome measures are the number of patients refusing treatment (acceptability); the number of patients terminating treatment prematurely (feasibility); and the severity of depressive symptoms (efficacy) according to an independent rater, the clinician and the patient.
Secondary outcome measures include general psychopathology, general psychotherapy outcome, pain, health-related quality of life, and cost-effectiveness. Clinical predictors of treatment outcome include demographic variables, psychiatric symptoms, cognitive and psychological patient characteristics, and the quality of the therapeutic relationship. Discussion: This study evaluates Short Psychodynamic Supportive Psychotherapy as a treatment for depressed outpatients by comparing it to the established evidence-based treatment Cognitive Behavioral Therapy. Specific strengths of this study include its strong external validity and the clinical relevance of its research aims. Limitations of the study are discussed. Trial registration: Current Controlled Trials ISRCTN31263312.